My writing on Kolmogorov Complexity:

Kolmogorov Complexity - it's a bit silly

More on Kolmogorov Complexity


Thoughts from others:

Yann LeCun (Turing Award winner) on Kolmogorov complexity, from a great podcast conversation with Lex Fridman:

"It's how you measure complexity - we don't actually have good ways of measuring, or at least we don't have good ways of interpreting the measures that we have at our disposal, like how do you measure the complexity of something?

So there's all those things like Kolmogorov-Chaitin-Solomonoff complexity - the length of the shortest program that would generate a bit string can be thought of as the complexity of that bit string.

I've been fascinated by that concept. The problem with that is that that complexity is defined only up to a constant, which can be very large. There are similar concepts derived from Bayesian probability theory, where the complexity of something is essentially the negative log of its probability, and you have a complete equivalence between the two things. And there you would think the probability is something that's well defined mathematically, which means complexity is well defined, but it's not true! You need to have a model of the distribution, and you may need to have a prior if you are doing Bayesian inference, and the prior plays the same role as the choice of the computer with which you measure Kolmogorov complexity. So every measure of complexity we have has some arbitrariness in it - an additive constant which can be arbitrarily large - and so how can we come up with a good theory of how things become more complex if we don't have a good measure of complexity?"
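
LeCun's two points - program length as a complexity measure, and its definition only up to a machine-dependent constant - can be made concrete with a toy experiment. The Python sketch below is my own illustration, not anything from the podcast: off-the-shelf compressors stand in for the "computer" in the definition, so each compressed length is merely an upper bound on the true Kolmogorov complexity, and the disagreement between the two compressors plays the role of the additive constant he mentions. The last line checks his probabilistic equivalence under an assumed uniform model over bytes.

# A toy illustration, not a real K(x) computation: compressed length is
# only an UPPER BOUND on Kolmogorov complexity, and the gap between two
# compressors is analogous to the machine-dependent additive constant.
import bz2
import random
import zlib

def complexity_upper_bound(data: bytes, compress) -> int:
    """Compressed length in bits: an upper bound on K(data), up to a constant."""
    return 8 * len(compress(data))

random.seed(0)
regular = b"ab" * 5000                                       # highly regular, 10,000 bytes
noisy = bytes(random.getrandbits(8) for _ in range(10000))   # essentially incompressible

for name, data in [("regular", regular), ("random", noisy)]:
    k_zlib = complexity_upper_bound(data, zlib.compress)
    k_bz2 = complexity_upper_bound(data, bz2.compress)
    # The zlib/bz2 disagreement plays the role of the arbitrary constant.
    print(f"{name:8s} zlib={k_zlib:6d} bits  bz2={k_bz2:6d} bits  "
          f"gap={abs(k_zlib - k_bz2)} bits")

# The Bayesian/Shannon counterpart LeCun mentions: a model assigning
# probability p(x) implies a code length of -log2 p(x) bits. Under a
# uniform model over bytes, p(x) = 256**-n, so -log2 p(x) = 8n bits -
# about what the random string costs to compress, while the regular
# string is far cheaper under any model that captures its repetition.
print(f"-log2 p(random) under a uniform byte model: {8 * len(noisy)} bits")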

Murray Gell-Mann (Nobel Prize winner and discoverer/inventor/theorist of the quark):

We have seen that subjectivity or arbitrariness is inherent in the definition of crude complexity, arising from such sources as coarse graining and the language used to describe the system. In AIC [algorithmic information content = Kolmogorov complexity], additional sources of arbitrariness have been introduced, namely the particular coding procedure that turns the description of the system into a bit string, as well as the particular hardware and software associated with the computer.

None of this arbitrariness bothers the mathematical information theorists very much, because they are usually concerned with limits in which finite arbitrariness becomes comparatively insignificant. They like to consider sequences of bit strings of increasing length, studying how the AIC behaves as the length approaches infinity. (That is reminiscent of how computer scientists like to treat the computational complexity of a sequence of similar problems as the problem size approaches infinity.)

...

We can learn an interesting lesson from those theorists. Even if we don't confine ourselves to systems that become infinitely large, it is important to understand that discussions of simplicity and complexity tend to become more and more meaningful as the bit strings become longer and longer. At the other extreme, say for a string of one bit, it is evidently meaningless to differentiate between simplicity and complexity.

Murray Gell-Mann, The Quark and the Jaguar: Adventures in the Simple and the Complex, Chapter 1 ('The Simple and the Complex'), page 36
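
Gell-Mann's asymptotic point is easy to see numerically. In the sketch below (my own, again using zlib compression only as a crude proxy for AIC), a maximally regular string and a random string of the same length are nearly indistinguishable when short, because fixed overhead - the analogue of the arbitrary additive constant - dominates everything; as the strings grow, the simple/complex distinction becomes stark.

# For short strings, compressor overhead (the analogue of the arbitrary
# additive constant) swamps everything and "simple" vs "complex" is
# nearly meaningless; for long strings the distinction becomes stark.
import random
import zlib

random.seed(0)

for n in [16, 256, 4096, 65536]:
    regular = b"a" * n                                      # maximally regular
    noisy = bytes(random.getrandbits(8) for _ in range(n))  # random
    c_regular = len(zlib.compress(regular))
    c_noisy = len(zlib.compress(noisy))
    print(f"n={n:6d} bytes  compressed(regular)={c_regular:6d}  "
          f"compressed(random)={c_noisy:6d}")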